Infrastructure
State of the Union
When I originally joined, getting an animation into the online web player was a lengthy process. It involved naming the file exactly right, making sure there were no namespaces or extra objects in the FBX, and pushing it through Sourcetree into the Unity repository. From that point the animator (or another person) had to build the files into asset bundles in Unity, then edit scripts maintained in a second git repo to include an array of the names of the asset bundles they wanted to upload. A utility file also had to be edited to include the web token and the name of the tenant being uploaded to. The asset bundles uploaded to each tenant were represented as singletons, and I would often get bugs filed because users expected one set of behavior and saw another on a different tenant, when really the asset there was just out of date and needed to be re-uploaded. All source Maya files were backed up using Perforce.
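To give a flavor of how manual this was, here is a hypothetical sketch (in Python, not our actual tooling) of the kind of hand-edited upload step involved. The hand-maintained bundle list, pasted-in token, and per-tenant settings are straight from the process described above; the endpoint, file names, and function names are placeholders I've made up for illustration.

```python
# Hypothetical sketch of the old hand-edited upload step (not the real scripts).
# Every value below had to be changed by hand before each upload, which is where
# most of the human error crept in.

import requests  # assumed HTTP client; the real tooling may have differed

# Hand-maintained array of the asset bundles to push this time around.
ASSET_BUNDLES = [
    "walk_cycle_01",
    "idle_loop_03",
    # ...anything forgotten here simply never got uploaded
]

# Hand-edited per-tenant settings living in a separate utility file.
TENANT_NAME = "customer-a"        # had to match the target tenant exactly
WEB_TOKEN = "paste-token-here"    # copied in manually before each run

def upload_bundle(bundle_name: str) -> None:
    """Upload a single built asset bundle to one tenant."""
    with open(f"Builds/{bundle_name}", "rb") as bundle:
        response = requests.post(
            f"https://example.com/{TENANT_NAME}/bundles",  # placeholder URL
            headers={"Authorization": f"Bearer {WEB_TOKEN}"},
            files={"bundle": (bundle_name, bundle)},
        )
    response.raise_for_status()

if __name__ == "__main__":
    # One run per tenant; repeating this for every tenant was easy to forget,
    # which is how assets ended up out of date on some tenants but not others.
    for name in ASSET_BUNDLES:
        upload_bundle(name)
```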
It was a very complicated process, one that realistically took two people (I was never able to successfully train an animator to go through it without introducing human error).
The upload solution
I worked closely with Product and Engineering to adjust the system. The first thing Engineering did was move the build to the cloud and set it to trigger whenever anything was pushed to the art repo through Sourcetree.
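For contrast with the hand-edited sketch earlier, here is a hypothetical sketch of what an upload step can look like once it runs inside a cloud build on every push: the bundle list is discovered from the build output and the token comes from the CI environment instead of a pasted-in utility file. The endpoint, paths, and environment variable name are illustrative, not our actual setup.

```python
# Hypothetical sketch of an upload step a cloud build could run on every push
# to the art repo (illustrative only): no hand-edited bundle list, no pasted-in token.
import os
from pathlib import Path

import requests  # assumed HTTP client

UPLOAD_URL = "https://example.com/assets"      # placeholder endpoint
WEB_TOKEN = os.environ["ASSET_UPLOAD_TOKEN"]   # injected by the CI environment

for bundle_path in Path("Builds").glob("*.bundle"):  # every bundle the build produced
    with bundle_path.open("rb") as bundle:
        requests.post(
            UPLOAD_URL,
            headers={"Authorization": f"Bearer {WEB_TOKEN}"},
            files={"bundle": (bundle_path.name, bundle)},
        ).raise_for_status()
```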
Assets were tagged by default as either public or private (for example, all body animations are public, while all facial animations are private), and instead of uploading to many tenants, we switched to uploading every asset to a single tenant. This was a paradigm shift: instead of customers having their own web tenant, they subscribed to assets.
Whether a customer had access to a specific asset was now managed through a website, which our customer-facing Solutions org was put in charge of.
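To make that shift concrete, here is a hypothetical sketch of the access model, assuming assets carry a public/private tag and customers hold a set of subscriptions. The types and names are illustrative; they are not the actual website's implementation.

```python
# Hypothetical sketch of the subscription-based access model (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    is_public: bool  # e.g. body animations are public, facial animations private

@dataclass
class Customer:
    name: str
    subscriptions: set[str] = field(default_factory=set)  # asset names subscribed to

def has_access(customer: Customer, asset: Asset) -> bool:
    """A customer sees an asset if it is public or they subscribe to it."""
    return asset.is_public or asset.name in customer.subscriptions

# One upload of "wave_hello" now serves everyone; who sees it is a data question,
# not a question of which tenant it happened to be uploaded to last.
wave = Asset("wave_hello", is_public=False)
acme = Customer("acme", subscriptions={"wave_hello"})
assert has_access(acme, wave)
```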
Our storage backend
Those changes removed the need for a third party to go through a painful, lengthy process on behalf of our artists and animators every time they wanted to update anything, but they didn't address the fact that Perforce is a fickle beast that everyone seemed to have problems with.
We sat down and identified that our core needs out of a backend system were:
- Ease of use for non-technical people
- File locking for multi-user access
- Cloud storage, with local access possible
- Mapping to our local drive (to tie into our game engine pipeline)
- Preview and commenting on videos
After some deep conversations we determined that, while version control is nice for backups, it isn't essential for artists in the way it is for programmers. Based on the list of core needs above, we ended up choosing Dropbox with the Dropbox Replay plugin. Replay lets you scrub through videos, leave comments, and draw directly on frames, which gave us a good shared reference point when going through video reviews with our Creative Directors.
Our render situation
One of the bigger challenges with high-quality video production was rendering; our laptops really weren't up to it. We did a fairly thorough assessment, reaching out to a company that rents out workstations for CG graphics and running a trial week, but ultimately, at the price point they were charging, it was cheaper to just buy a computer we could remote into.
We went with these specs:
- CPU: AMD Ryzen 9 7950X3D or Intel Core i9-13900K
- GPU: NVIDIA RTX 4090
- RAM: 64 GB (DDR4 or DDR5)
- Storage (system): 2 TB SSD
- Storage (data): 5 TB SSD (optional)
We then used Parsec to keep latency down when working on the machine over remote desktop.